perf(benchmark): add executor-aware metrics, automated benchmarking pipeline, and performance report #1533
Conversation
…d comparison plots Signed-off-by: Ankit Basu <ankitbasu14@gmail.com>
Hi @neetance, great effort. I really appreciate it. Thanks a lot 🙏
Force-pushed from a4921ee to 72b7588 (Compare)
Force-pushed from 72b7588 to d54ddfa (Compare)
Force-pushed from d54ddfa to dec9b70 (Compare)
@AkramBitar @Effi-S, please review this. Thanks 🙏
Hi @neetance,

```python
#!/usr/bin/env python3
"""
plot_benchmark_results.py: Generates comparison plots across executor
strategies (serial, unbounded, pool) for each parallel benchmark test.
"""
```
Hi @neetance,
Nice work :)
Can we add a brief explanation of what each strategy means?
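To make the request concrete, here is a minimal Python analogy of the three strategies. The names (serial, unbounded, pool) come from the plot script; the dispatch semantics shown here are my assumption about what they mean, not the actual runner implementation:

```python
from concurrent.futures import ThreadPoolExecutor


def run_serial(tasks):
    # "serial": execute each task inline, one after another
    return [task() for task in tasks]


def run_unbounded(tasks):
    # "unbounded": one worker per task, concurrency is not capped
    with ThreadPoolExecutor(max_workers=len(tasks)) as ex:
        futures = [ex.submit(task) for task in tasks]
        return [f.result() for f in futures]


def run_pool(tasks, size=8):
    # "pool": a fixed-size pool caps concurrency at `size` workers
    with ThreadPoolExecutor(max_workers=size) as ex:
        return list(ex.map(lambda task: task(), tasks))


tasks = [lambda i=i: i * i for i in range(4)]
print(run_serial(tasks))  # [0, 1, 4, 9]
```

All three return the same results; they differ only in how many workers run at once, which is exactly what the goroutine-count metric is meant to surface.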
```python
def _next_run():
    n = run_counter[0]
    run_counter[0] += 1
    return n
```
Nice way to remove the global counter that was used.
If you want to make this even cleaner, you can try:

```python
from itertools import count

RUN_COUNTER = count(0)
...
n = next(RUN_COUNTER)
```
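For reference, `itertools.count` is an infinite iterator that yields consecutive integers on each `next()` call, so it replaces both the one-element list and the helper function:

```python
from itertools import count

RUN_COUNTER = count(0)  # yields 0, 1, 2, ... on successive next() calls

first = next(RUN_COUNTER)
second = next(RUN_COUNTER)
print(first, second)  # 0 1
```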
```python
# --- Unit conversion ---
timestamp_str = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
output_folder_name = f"benchmark_logs_{timestamp_str}"
```
We can shorten this to:

```python
output_folder_name = f"benchmark_logs_{datetime.now():%F_%H-%M-%S}"
```

```python
validator_benchmarks_folder = os.path.join(TOKENSDK_ROOT, "token/core/zkatdlog/nogh/v1/validator")
issuer_benchmarks_folder = os.path.join(TOKENSDK_ROOT, "token/core/zkatdlog/nogh/v1/issue")
validator_benchmarks_folder = os.path.join(TOKENSDK_ROOT, "token/core/zkatdlog/nogh/v1/validator")
```
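One caveat on the suggestion: `%F` is a C-library extension that expands to `%Y-%m-%d` on glibc and macOS but is not guaranteed on every platform (notably Windows), so the spelled-out directives are the portable choice. A quick sketch with a fixed timestamp showing that the f-string format spec routes through the same `strftime` machinery:

```python
from datetime import datetime

ts = datetime(2024, 5, 1, 13, 45, 9)  # fixed timestamp for illustration

a = ts.strftime("%Y-%m-%d_%H-%M-%S")
b = f"benchmark_logs_{ts:%Y-%m-%d_%H-%M-%S}"
print(b)  # benchmark_logs_2024-05-01_13-45-09
```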
`pathlib` can be used instead of `os.path` here.
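A sketch of that swap, assuming a placeholder root (the real `TOKENSDK_ROOT` is defined elsewhere in the script):

```python
import os
from pathlib import Path

TOKENSDK_ROOT = "/tmp/tokensdk"  # illustrative root, not the repo's actual value

# current style
old = os.path.join(TOKENSDK_ROOT, "token/core/zkatdlog/nogh/v1/validator")

# pathlib style: the `/` operator joins segments into a Path object
new = Path(TOKENSDK_ROOT) / "token/core/zkatdlog/nogh/v1/validator"
print(str(new) == old)  # True on POSIX
```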
Hi @neetance, in the unbounded case I would expect much more than the bounded, no?
…mprove benchmarking scripts Signed-off-by: Ankit Basu <ankitbasu14@gmail.com>
Force-pushed from dec9b70 to bdfd597 (Compare)
Hi @adecaro, @Effi-S, thanks for your reviews; I really appreciate the suggestions to improve and clean up the code even further 🙏 I ran the benchmark again, and this time we see a high number of goroutines. For serial and pool, the results are normal, as expected.
I also applied the refactoring changes suggested by @Effi-S to make the code cleaner and ran the script again, and everything is passing. Let me know if this looks good 😄
Summary
Extends the benchmarking framework to support executor-aware analysis and improves visibility into system behavior under different execution strategies.
Changes
run_benchmarks.py:
- all three executor strategies for every parallel benchmark
- all strategies coexist in one CSV row
- stored as `TestParallelBenchmarkSender[pool]/8` goroutines
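The "all strategies coexist in one CSV row" layout might look like this sketch; the column names and metric values here are hypothetical, not the script's actual schema:

```python
import csv
import io

# hypothetical per-strategy metrics for one benchmark test;
# the real script's schema and values may differ
results = {
    "serial": {"goroutines": 1},
    "unbounded": {"goroutines": 112},
    "pool": {"goroutines": 8},
}

row = {"test": "TestParallelBenchmarkSender"}
for strategy, metrics in results.items():
    # one column group per strategy, all in the same row
    row[f"{strategy}_goroutines"] = metrics["goroutines"]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(row))
writer.writeheader()
writer.writerow(row)
print(buf.getvalue())
```

Keeping every strategy in one row makes per-test comparisons a single-row lookup when plotting.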
plot_benchmark_results.py:
- count annotations, coloured by executor strategy

runner.go:
- `GoRoutinesCreated` field added to `Result`, captured as a net delta of `runtime.NumGoroutine()` across the recording window ("Goroutines Created")

Benchmark Results
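The net-delta capture described for runner.go (snapshot before and after the recording window, report the difference) can be sketched in Python, with threads standing in for goroutines and `threading.active_count()` playing the role of `runtime.NumGoroutine()`; the analogy is mine, not the actual Go code:

```python
import threading
import time


def measure_thread_delta(workload):
    # snapshot the live-thread count before and after the recording
    # window and report the net change, mirroring the runner.go idea
    before = threading.active_count()
    workload()
    after = threading.active_count()
    return after - before


def spawn_two_workers():
    # start two daemon threads that stay alive past the measurement
    for _ in range(2):
        threading.Thread(target=time.sleep, args=(1,), daemon=True).start()


delta = measure_thread_delta(spawn_two_workers)
print(delta)  # 2
```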
I ran the benchmark across the 3 different strategies with 10 workers to show the number of goroutines created and here are the results:
Serial
Pool
Unbounded
Comparison report
I have attached a PDF report comparing the results of the different benchmarks under different configurations:
benchmark_results.pdf
Let me know if this is good 🙏